-
The rapid growth of data-driven technologies and the emergence of various data-sharing paradigms have underscored the need for efficient and stable data exchange protocols. In any such exchange, agents must carefully balance the benefit of acquiring valuable data against the cost of sharing their own. Ensuring stability in these exchanges is essential to prevent agents, or groups of agents, from departing and conducting local, and potentially more favorable, exchanges among themselves. To address this, we study a model in which agents participate in a data exchange. Each agent has an associated payoff for the data acquired from other agents and incurs a cost when sharing its own data. The net utility of an agent is defined as its payoff minus its cost. We adapt the classical notion of core stability from cooperative game theory to the setting of data exchange. A data exchange is said to be core-stable if no subset of agents has an incentive to deviate to a different exchange. We show that a core-stable data exchange is guaranteed to exist when agents have concave payoff functions and convex cost functions, a setting that is typical in domains such as PAC learning and random discovery models. We further show that relaxing either of these conditions can result in the nonexistence of core-stable data exchanges. We then prove that finding a core-stable data exchange is PPAD-hard, even when the set of potential blocking coalitions is restricted to groups of constant size. To the best of our knowledge, this is the first known PPAD-hardness result for core-like stability guarantees in data economics. Finally, we show that data exchange can be modeled as a balanced n-person game. This immediately yields a pivoting algorithm via Scarf's 1967 theorem on the core. We demonstrate through empirical results that this pivoting algorithm performs well in practice.
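As a concrete illustration of the core-stability condition above, the following minimal Python sketch brute-forces a tiny instance with concave (square-root) payoffs and convex (quadratic) costs, the regime in which the paper guarantees existence. The instance size, grid, and specific payoff and cost functions are hypothetical choices for exposition, not the paper's construction.

    import itertools
    import math

    # Toy instance (hypothetical): agent i shares an amount x_i in [0, 1];
    # payoff is concave (sqrt of data received), cost is convex (quadratic
    # in data shared), the regime where a core-stable exchange exists.
    N = 3
    GRID = [k / 10 for k in range(11)]  # coarse grid over sharing levels

    def utility(i, coalition, shares):
        # Net utility = payoff on data received from coalition partners
        # minus the convex cost of the agent's own sharing.
        received = sum(shares[j] for j in coalition if j != i)
        return math.sqrt(received) - shares[i] ** 2

    def blocks(coalition, base_utils):
        # A coalition blocks if some internal exchange makes every
        # member strictly better off than the candidate grand exchange.
        for profile in itertools.product(GRID, repeat=len(coalition)):
            shares = dict(zip(coalition, profile))
            if all(utility(i, coalition, shares) > base_utils[i]
                   for i in coalition):
                return True
        return False

    agents = tuple(range(N))
    for profile in itertools.product(GRID, repeat=N):
        shares = dict(zip(agents, profile))
        utils = {i: utility(i, agents, shares) for i in agents}
        if not any(blocks(c, utils)
                   for r in range(1, N + 1)
                   for c in itertools.combinations(agents, r)):
            print("core-stable on this grid:", shares, utils)
            break

The paper's PPAD-hardness result indicates that nothing like this brute-force search can scale: even with blocking coalitions restricted to constant size, finding a core-stable exchange remains hard in general.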
-
Federated learning (FL) is a popular collaborative learning paradigm whereby agents with individual datasets can jointly train an ML model. While greater data sharing improves model accuracy and leads to higher payoffs, it also raises costs associated with data acquisition or loss of privacy, causing agents to be strategic about their data contributions. This leads to undesirable behavior at a Nash equilibrium (NE), such as free-riding, resulting in sub-optimal fairness, data sharing, and welfare. To address this, we design MSHAP, a budget-balanced payment mechanism for FL that admits Nash equilibria under mild conditions and achieves reciprocal fairness: each agent's payoff equals her contribution to the collaboration, as measured by her Shapley share. In addition to fairness, we show that the NE under MSHAP has desirable guarantees in terms of accuracy, welfare, and total data collected. We validate our theoretical results through experiments, demonstrating that MSHAP outperforms baselines in terms of fairness and efficiency.
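To make the reciprocal-fairness target concrete, the sketch below computes exact Shapley shares for a small, hypothetical coalition-value function (a diminishing-returns accuracy model); both v and the per-agent contribution numbers are stand-ins for exposition, not MSHAP's actual model or payment rule.

    import itertools
    from math import factorial

    def v(coalition, data):
        # Hypothetical coalition value: diminishing-returns accuracy
        # gain from the total data the coalition contributes.
        total = sum(data[i] for i in coalition)
        return total / (1.0 + total)

    def shapley(data):
        # Exact Shapley value: weighted average marginal contribution
        # of each agent over all subsets of the other agents.
        agents = list(data)
        n = len(agents)
        phi = {i: 0.0 for i in agents}
        for i in agents:
            others = [j for j in agents if j != i]
            for r in range(n):
                for s in itertools.combinations(others, r):
                    w = factorial(r) * factorial(n - r - 1) / factorial(n)
                    phi[i] += w * (v(s + (i,), data) - v(s, data))
        return phi

    data = {"a": 4.0, "b": 2.0, "c": 1.0}  # per-agent data contributions
    phi = shapley(data)
    print(phi)
    # Shapley values split the grand-coalition value exactly, which is
    # what lets a payment rule built on them stay budget-balanced.
    print("efficient:", abs(sum(phi.values()) - v(tuple(data), data)) < 1e-9)

Under reciprocal fairness, each agent's equilibrium payoff would equal its phi[i]; a budget-balanced payment rule transfers the difference between raw payoffs and these shares.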
-
Metallic hydrogen (MH) has been predicted to be metastable, a high-temperature superconductor, and a powerful rocket propellant. If these predictions hold, MH could have an enormous impact on society. We have produced MH in a diamond anvil cell and studied its metastability. At a temperature of 5 K, the load on the metallic hydrogen was reduced stepwise until the pressure was essentially zero. As the load, and thus the pressure, was turned down, the sample evidently transformed to the molecular phase and escaped; the hole in the gasket containing the MH closed. We were therefore unable to determine the value of the metastability pressure. Within our experimental uncertainty, metallic hydrogen was not observed to be metastable at zero pressure.
-
The continuous growth of CNN complexity not only intensifies the need for hardware acceleration but also presents a huge challenge: the solution space for CNN hardware design and dataflow mapping becomes enormously large, in addition to being discrete and lacking a well-behaved structure. Most previous approaches either are stochastic metaheuristics, such as genetic algorithms, which are typically very slow on large problems, or rely on expensive sampling, e.g., Gumbel-Softmax-based differentiable optimization and Bayesian optimization. We propose an analytical model for evaluating the power and performance of CNN hardware design and dataflow solutions. Based on this model, we introduce a co-optimization method consisting of nonlinear programming and parallel local search. A key innovation in this model is its matrix form, which enables the use of a deep learning toolkit for highly efficient computation of power/performance values and gradients during optimization. In handling the power-performance tradeoff, our method can lead to better solutions than minimizing a weighted sum of power and latency. The average relative error of our model compared with Timeloop is as small as 1%. Compared to state-of-the-art methods, our approach achieves solutions with up to 1.7× shorter inference latency, 37.5% less power consumption, and 3× less area on ResNet-18. Moreover, it provides a 6.2× speedup of optimization.
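The matrix-form idea can be illustrated with a short, heavily hedged PyTorch sketch: express an analytical power/latency model as tensor operations so the toolkit's autograd supplies gradients for the nonlinear-programming step. The model below (a quadratic power term plus reciprocal latency terms) and all coefficients are made-up stand-ins, not the paper's Timeloop-validated model.

    import torch

    torch.manual_seed(0)

    # Continuous relaxation of hardware/dataflow parameters (e.g.,
    # PE-array dimensions, buffer sizes, tiling factors), rounded to
    # valid discrete values after optimization.
    x = torch.rand(8, requires_grad=True)

    A = torch.rand(8, 8)  # hypothetical power-model coefficients
    b = torch.rand(8)     # hypothetical latency-model coefficients

    def power(x):
        return x @ A @ x  # quadratic matrix-form power term

    def latency(x):
        # More resources imply lower latency; clamp keeps terms finite.
        return (b / x.clamp(min=1e-3)).sum()

    opt = torch.optim.Adam([x], lr=0.05)
    for step in range(200):
        opt.zero_grad()
        # A weighted sum is the simplest scalarization; the paper argues
        # its method handles the tradeoff better, but the autograd
        # mechanics are the same.
        loss = power(x) + 0.5 * latency(x)
        loss.backward()   # gradients of the analytical model for free
        opt.step()
        with torch.no_grad():
            x.clamp_(1e-3, 1.0)

    print("relaxed design point:", x.detach())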
-
The EIC Comprehensive Chromodynamics Experiment (ECCE) detector has been designed to address the full scope of the proposed Electron Ion Collider (EIC) physics program, as presented by the National Academy of Sciences, and to provide a deeper understanding of the quark–gluon structure of matter. To accomplish this, the ECCE detector offers nearly 4π acceptance and energy coverage along with excellent tracking and particle identification. The ECCE detector was designed to be built within the budget envelope set out by the EIC project while simultaneously managing cost and schedule risks. This detector concept has been selected as the basis for the EIC project detector.
-
Using a three-wave longitudinal data set of Mexican-origin adolescents (N = 602; mean age = 12.92 years, SD = 0.91 at Wave 1), this study examines parallel pathways from early exposure to ethnic discrimination and to drug-using peers, separately, to underage drinking status by late adolescence. Negative affect was expected to mediate the link from ethnic discrimination to underage drinking status (the stress-induced pathway), whereas social alcohol expectancy was expected to mediate the link from drug-using peers to underage drinking status (the socialization pathway). Our findings lend support to the stress-induced pathway while controlling for the socialization pathway. For the stress-induced pathway, we found that early ethnic discrimination experiences were related to a higher likelihood of having engaged in underage drinking by late adolescence through elevated negative affect sustained across adolescence. For the socialization pathway, we found no association between affiliation with drug-using peers in early adolescence and underage drinking status, either directly or indirectly. The present findings highlight the unique role of early ethnic discrimination experiences in underage drinking among Mexican-origin adolescents, over and above the effect of drug-using peers. Alcohol use interventions targeting ethnic minority adolescents should account for adolescents' ethnic discrimination experiences by helping adolescents develop adaptive coping strategies to handle discrimination-induced negative affect (e.g., reappraisal) rather than using alcohol to self-medicate.
-
Many sequential decision making tasks can be viewed as combinatorial optimization problems over a large number of actions. When the cost of evaluating an action is high, even a greedy algorithm, which iteratively picks the best action given the history, is prohibitive to run. In this paper, we aim to learn a greedy heuristic for sequentially selecting actions as a surrogate for invoking the expensive oracle when evaluating an action. In particular, we focus on a class of combinatorial problems that can be solved via submodular maximization (either directly on the objective function or via submodular surrogates). We introduce a data-driven optimization framework based on the submodular-norm loss, a novel loss function that encourages the resulting objective to exhibit diminishing returns. Our framework outputs a surrogate objective that is efficient to train, approximately submodular, and can be made permutation-invariant. The latter two properties allow us to prove strong approximation guarantees for the learned greedy heuristic. Furthermore, our model is easily integrated with modern deep imitation learning pipelines for sequential prediction tasks. We demonstrate the performance of our algorithm on a variety of batched and sequential optimization tasks, including set cover, active learning, and data-driven protein engineering.
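Since the abstract does not define the submodular-norm loss precisely, the sketch below shows one plausible reading: a hinge penalty on sampled violations of diminishing returns for a permutation-invariant (DeepSets-style) surrogate. The architecture, sampling scheme, and penalty are all assumptions for illustration, not the paper's definition.

    import torch
    import torch.nn as nn

    class SetScorer(nn.Module):
        # Permutation-invariant surrogate objective: sum-pool item
        # embeddings, then score the pooled representation.
        def __init__(self, d=16):
            super().__init__()
            self.phi = nn.Sequential(nn.Linear(4, d), nn.ReLU(),
                                     nn.Linear(d, d))
            self.rho = nn.Sequential(nn.Linear(d, d), nn.ReLU(),
                                     nn.Linear(d, 1))

        def forward(self, items):  # items: (set_size, 4)
            if items.numel() == 0:
                return torch.zeros(())
            return self.rho(self.phi(items).sum(0)).squeeze()

    def diminishing_returns_penalty(f, pool, n_samples=8):
        # Penalize cases where adding item e to a superset B gains more
        # than adding e to a subset A of B (a submodularity violation).
        pen = torch.zeros(())
        for _ in range(n_samples):
            idx = torch.randperm(len(pool))
            a, b = sorted(torch.randint(1, len(pool), (2,)).tolist())
            A, B = pool[idx[:a]], pool[idx[:b]]
            e = pool[idx[b]].unsqueeze(0)
            gain_A = f(torch.cat([A, e])) - f(A)
            gain_B = f(torch.cat([B, e])) - f(B)
            pen = pen + torch.relu(gain_B - gain_A)  # hinge on violations
        return pen / n_samples

    f = SetScorer()
    pool = torch.rand(20, 4)  # toy ground set of 20 items
    loss = diminishing_returns_penalty(f, pool)  # add to imitation loss
    loss.backward()
    print("penalty:", loss.item())

In training, a term like this would be added to the imitation loss so the learned surrogate stays approximately submodular, which is what the approximation guarantees for the learned greedy heuristic rely on.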